March 28, 2024

Future Thinking: Develop Your AI Intuition

Fingerspitzengefühl: A German loanword literally meaning “fingertips feeling.” 

It’s a term that AI trailblazer Noah Brier of Brxnd.ai cites in his conversation with Drew about the digital sixth sense today’s marketers must hone to master AI’s potential. 

In this episode, Noah unveils a roadmap for marketers eager to develop their AI intuition, cutting through barriers to adoption and offering practical steps to not just use AI, but to think alongside it. Tune in as we unpack how AI can elevate your B2B strategy, from distilling brand essence to crafting narratives that resonate in the digital marketplace.

Pro Tip: Noah shares a peek into Brxnd.ai’s AI-driven brand collaboration engine—Collabs—so you might want to watch the video version of this on our YouTube channel. 

The conversation comes ahead of Brxnd’s 2024 Marketing x AI Conference in NYC (May 8th, 2024), the leading AI marketing event that will sell out fast.  

What You’ll Learn 

  • How to build your AI intuition  
  • How to use AI to think big  
  • How AI can measure brand strength  

Renegade Marketers Unite, Episode 390 on YouTube

Highlights

  • [2:11] Noah Brier + Brxnd.ai 
  • [5:57] Barriers to AI adoption 
  • [13:19] Where is AI going? 
  • [17:12] AI prediction machine? 
  • [21:46] AI is not a computer 
  • [24:58] AI “manifesto generators”  
  • [27:42] How to build AI intuition  
  • [29:23] Collabs: Measuring brand strength 
  • [38:09] Where to start (3 steps) 
  • [48:32] Noah’s key AI takeaways

Highlighted Quotes  

“Maybe we’re just thinking too much about the generative side of it altogether. Rather than having AI speculate, what if it could ask us better questions to make us think better?” —Noah Brier, Founder of BrXnd.ai

“These large language models are bad at everything we expect computers to be good at.” —Noah Brier, Founder of BrXnd.ai

“AIs are pattern recognition machines, and in a way, you can conceive of brands as pattern generators. The best brands generate the most distinct patterns, they do it consistently, they do it over time, they’re recognizable. These tools are particularly well suited for each other.” —Noah Brier, Founder of BrXnd.ai 

“AIs are probably the single best tool for understanding mass perception—at least mass perception for the dataset, which is the internet—that we’ve ever had.” —Noah Brier, Founder of BrXnd.ai 

Full Transcript: Drew Neisser in conversation with Noah Brier

 

Drew: Hello, Renegade Marketers. I’m excited that you’re here to listen to another episode of Renegade Marketers Unite. This show is brought to you by CMO Huddles, the only marketing community dedicated to inspiring B2B greatness, and that donates 1% of revenue to the Global Penguin Society. Wait, what? Well, it turns out that B2B CMOs and penguins have more in common than you think. Both are highly curious and remarkable problem solvers. Both prevail in harsh environments by working together with peers. And just as a group of penguins is called a huddle, over 352 B2B CMOs come together and support each other via CMO Huddles. If you’re a B2B marketer who could share, care, and dare with the best of them, do yourself a favor and dive into CMO Huddles. We even have a free starter program, and of course, our robust Leader Program, neither of which requires a penguin hat. Thank goodness. Join us. And before we get to the episode, let me do a quick shout out to the professionals at Share Your Genius. We started working with them over a year ago to make this show even better and have been blown away by their strategic and executional prowess. If you’re thinking about starting a podcast or want to turbocharge your current show, be sure to talk to Rachel Downey at shareyourgenius.com and tell her Drew sent you.

Narrator: Welcome to Renegade Marketers Unite, possibly the best weekly podcast for CMOs and everyone else looking for innovative ways to transform their brand, drive demand, and just plain cut through, proving that B2B does not mean boring to business. Here’s your host and Chief Marketing Renegade, Drew Neisser.

Drew: Hello, Renegade Marketers! Welcome to Renegade Marketers Unite, the top-rated podcast for B2B CMOs and other marketing-obsessed individuals. Earlier this year, Noah Brier kindly shared his generative AI expertise with my fellow board members at the Urban Green Council. It was a fascinating conversation, one that I wish we had recorded. To amend that, to go a bit deeper, and to actually include the slides that he had prepared but didn’t get to present, we’re going to recreate that conversation, and Noah will get to share some of those slides. So if you’re listening to the podcast, I apologize; you probably ought to go watch it on YouTube. Alright, with that: Noah. First of all, for folks, if you don’t know Noah, there’s a really good chance you’re not in tech, because he’s been a fixture in tech, having started several companies, including the one you see on the screen right here, which is BrXnd.ai, with that X in the middle. Noah, welcome back. You’ve been a guest many, many times on this show, and we’ll include your bio in our show notes. But so, what the heck is BrXnd.ai?

Noah: That is an excellent question. I’m still trying to figure it out exactly. But it’s sort of a holding company for me to do lots of things related to marketing and AI. So I’ve been putting on events, I’ve been doing some consulting, I’ve been building lots of experiments. And they all sort of fit, however neatly, under this umbrella that I call BrXnd.ai, and I describe it as an organization at the intersection of marketing and AI. And I think that’s the best way to say it.

Drew: I love that. And you did mention conferences. And so let’s talk about how you have your second conference coming up on May 8, 2024. What’s going to be new, and what are you excited about for this conference?

Noah: Well, I’m excited to just catch up a year later. So you know, I did the first one last May. And it was really amazing. It was a day of bringing 200 plus marketers together and trying to focus the conversation on what’s possible and what’s real, and sort of move away from the speculative and into reality. What I think is amazing about this technology is that it works; we don’t really need to speculate about what might happen because we have so much that we can play with and experiment with right now. And so I’d say, first and foremost, I’m just excited to build on that. I think probably one of my biggest surprises from the back half of last year is that brands were much slower to adopt this stuff than I would have guessed. I think that a lot of things stood in their way, whether it was legal questions or process questions or IP questions. And so in some ways, I think we haven’t made that much progress as an industry in understanding where to actually get going with this, and I hope to push forward that conversation. And you know, I’d say overall my goal with everything I’ve been doing with BrXnd is just to try to push the industry to have more interesting conversations about this technology, because I think that, unfortunately, we’re flooded with a lot of nonsense.

Drew: Well, a couple of notes. One, I loved the conference last year. I mean, the fact is we got to see folks creating ads and seeing how they did, and at that time it was very difficult to tell the human-created from the bot-created; I imagine if you recreated that, it would be even harder this year. We had Toni Clayton-Hine from EY, whom I had the pleasure of interviewing, we had a lawyer talking about it, we had lots of folks who were using these tools in really interesting ways. And I personally can’t wait to see where we’re going. I’m also excited that I get to interview Lauren Boyman from KPMG, who you will learn is investing big time in generative AI and its uses; they’re client number one for a massive investment. So anyway, that’s all very cool. Now, you’ve mentioned the adoption rate, and I think about it from the CMO Huddles community standpoint, and I know that 80% of our Huddlers are using gen AI, certainly for content. And what we’ve been pushing lately is: get beyond that. That’s fine, it’s great, but get beyond that. Think bigger about it. What can you build? What can you make? What can you do that will help really, truly differentiate your brand? Talk about adoption at this point in time. I’m thinking this year there are going to be a lot more folks saying, “Okay, we’re all in.” Right? I mean, there are still some legal holdouts; you and I heard that from, you know, in-house counsel still saying, “don’t touch it.”

Noah: For sure. And, you know, I think that some of the legal questions have been answered; some of them probably won’t be answered for years. I will have a lawyer coming to speak at the event again, to try to continue that conversation; the legal conversation was one of my most popular sessions. You know, I think the challenge people have, though, is that as much as legal is a barrier, in some ways it’s just the simplest barrier to stop you. Right? It’s a good excuse. I kind of think the bigger barrier is that it’s just strange. And it’s very hard, if you haven’t gotten your hands on it and really gotten your hands dirty, to imagine where to begin. So I don’t think that problem is magically going to get solved. I’ve been talking and working with lots of folks who are trying to solve it, and it’s hard. Some of it’s just going to take time. Teams have to get comfortable with it. And like anything inside a large organization, you can’t just mandate that everybody use it. And even if you can mandate that everybody use it, it’s not going to have the effect that you hope it has. And so some of it is just starting to feel out those use cases and starting to feel out those opportunities. And I certainly don’t have all the answers to where those are. I would say my main message to brands over the last year has been: just try and learn, and don’t start big, start small. Don’t go hire some gigantic company to spend a year building something for you. Get something built in a few days and start to get a feel for where this works and where it doesn’t. So I think those are the real barriers. And it’ll probably meet in the middle, right? It’s going to take forward-thinking leaders who are getting their hands dirty and encouraging their teams.
And then it’s going to take forward-thinking sort of individual contributors, who are personally excited about this, and are going to sort of bubble up the most interesting use cases throughout the organization.

Drew: Yeah, it’s so funny, I was talking to a CMO the other day, and they were struggling to get their writers to use this at all. And I thought, you know, it reminded me of when purist photographers wouldn’t touch digital, or probably a much better parallel is when we talked to major marketers who didn’t want to do digital and wouldn’t give access to their employees. So it’s the same resistance. But you and I know that it will have to be broken down. And what’s amazing about all of this is there isn’t a single problem or question, other than math, where I haven’t been able to do something I never could have done before because of this. I had to fix a DNS record. I don’t know what DNS records are. I go to ChatGPT and say, “Hey, I need a DNS record for a DMARC. What is it? What’s it look like?” Now, why am I doing it? Because I’m a small business person; I have to solve quick problems sometimes. So it’s incredible. So anyway, I know you have a framework to think about this. And I think that would be really helpful for folks who may have been slow to adopt it. And even for folks who have adopted it one way or another, it’s just a great way of thinking about the business. So let’s do that.
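For readers curious about the DMARC example Drew mentions: a DMARC policy is just a DNS TXT record published at the `_dmarc` subdomain of your domain. A minimal illustrative record (with example.com and the reporting address as placeholders) looks like this:

```text
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
```

Here `p=none` means monitor-only; `p=quarantine` or `p=reject` tell receiving mail servers to act on messages that fail authentication, and `rua` is where aggregate reports get sent.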

Noah: Okay, cool. Yeah, let’s dive in. So, we’re gonna talk about AI, obviously, and a lot of what I have been talking about and thinking about is in reaction to these sorts of articles I’ve been seeing, which are very focused on a very narrow set of use cases within marketing. And not only is it a narrow set of use cases, it’s things like how to do better SEO tags, or how it’s going to change content, or the sort of scare tactics, really, right? Like, when is it going to take your job? When is it gonna take your copywriter’s job? When is it going to take your creative director’s job? That is the whole conversation that’s happening. And it just feels sad. It’s so far away from the feelings that I’ve had. Or the worst of all is the one that’s like, when is it going to eat your brain? And there are certainly a lot of people who are very serious about AI safety stuff, and it’s worth thinking about. But for me, and what you were just describing, it’s just been a very different experience. Once I really dug in and made this part of my world, I use it every day in code. And what’s interesting about the way I use it every day in code is not just that it is helping me write code, but I start to notice that it changes the way that I write code. So I add extra comments so I can give the AI extra context so it can better help me. Or I use it just day to day, especially to do things I hate doing, like writing proposals and things like that, things that would take me hours not because they take a really long time, but because I would put them off. Or, you know, the thing you’re describing is one of my favorites, the DMARC example from before. I think part of what it’s done is just given me a ton of confidence to solve new kinds of problems that I would feel stuck on.
And DMARC is a single example, but I run into this with code where it’s like, “Oh, well, you know, I really want to build an iOS app, I’ve never built an iOS app.” But I can write enough code that if I can see code, I can understand it. And so then you just get yourself started, and you’ve got enough to get going. And that’s amazing. Or obviously making images. So this is actually from the conference last year; I did all of the image generation in Midjourney, and I’m not a designer. Really, if I could have one skill, it would be design. I wish I could do that, but I can’t, I’m terrible at it. And instead of Photoshop, this is Canva’s AI magic eraser, right? And it just gets rid of things. This stuff is just incredible and empowering for someone like me, also a small business owner, to just be able to do an incredible amount of stuff. And the way I conceive of it is, it’s this great accelerant. It’s like jet fuel for my creativity; it allows me to make anything I want. But it’s also massively confusing. And so I finally put this chart together, where the more time I spend with AI, the less confident I feel in my ability to forecast where it’s all going to go. And that’s a really strange feeling, I think, for me and for many of us. I was just out this morning, talking to somebody who spent their career working in digital marketing. And we were just talking about how at the beginning of the 2010s, when I started Percolate, my last company, we used this chart, and the chart shows that by the end of the decade, everybody’s gonna have a mobile phone. And I don’t think we were geniuses for recognizing that. The iPhone came out in 2007, we all had iPhones, and it was pretty obvious that the result was going to be that a lot more people were going to use social media in new kinds of ways, and it was going to change the way brands create content. So I think it was just linear; it was easy to see where this was gonna go.
This is literally a linear chart. And it played out roughly this way. Now everybody in the world has a mobile phone. AI is way blurrier. And that’s like a very strange feeling.

Drew: Let me pause you there. Because I totally agree with this. But one of the things that I wondered is, can we use AI to speculate where this is going? Can we have it create some scenarios for us, based on where it is today, and so forth?

Noah: So, two answers to that. The first one is, yes, of course, you can certainly do that. Although AI’s ability to speculate is going to be grounded in the reality that it understands, and the reality it understands is our reality, not a reality it’s created. So I would imagine it will have the same kind of difficulty speculating; I can speculate about it, too, I just don’t know that I’ll be right. I think the more interesting answer, and something I’ve been circling around a bunch lately, is that maybe we’re just thinking too much about the generative side of it altogether; we’re too excited about its ability to generate and to do something like answer that question of what is going to happen. Instead, what if we had the AI ask us the kinds of questions we should be thinking about, so that we can start to answer those things? So rather than having it speculate, what if it could ask us better questions to make us think better? That’s a whole area that I have been really fascinated with exploring: the idea of AI as a sort of feedback mechanism. I was actually just talking to a friend of mine who’s working on a book. He asked if I could help him build a GPT that would interview him to help him get some of his ideas out for the book, and I was like, that’s a perfect use case. That’s amazing. But what I just said is, to me, sort of amazing and interesting: literally every day for the last few years I have been playing with this stuff, and it was only two weeks ago that I had this thought that maybe we’re too focused on the generative side, and maybe we should be focused on the other side of it. And so it’s just hard, and maybe that’s okay until we get a better feel for it.

Drew: It’s not hard. It’s just exciting. And I’m going to rephrase it: think of generative AI, at least in one scenario, as, you know, Socrates, the Socratic method. It’s just going to keep asking you questions. It’s funny, I’m watching a TV show and the lead cop keeps saying, “Wrong question, wrong question” to try to solve the case. Right? Well, that’s a really interesting thing, because it ought to be able to come up with great questions. We ought to try it.

Noah: I totally agree with you. By the same token, I do acknowledge there are a lot of people who, given a moment of great uncertainty, feel very uncomfortable. I happen to enjoy these moments, but it’s definitely a personal preference. There’s a quote that captures it perfectly. Miles Brundage is at OpenAI doing policy work, and he had this tweet, from early last year, where he said, “The value of keeping up with AI these days is not that it gives you some amazing foresight to predict the future, it’s that you get really good at sniffing out other people’s overconfidence.” Basically, you get really good at knowing when other people are full of it. And that, to me, more than anything else, has captured my feeling, which is: in all of the time that I’ve spent with it, I don’t feel really good about answering the question of, what is this going to do? What’s it going to do to the industry? What’s it going to do to brands? I do feel really good about saying, “Hey, that person who is telling you with great certainty what it’s going to do to the industry is not a good person to listen to.”

Drew: It’s going to have an impact; it’s up to you to use it to make an impact wherever you are. I love the thing: it’s not what it can do, it’s “Give it a challenge.” And I think the reason you’re so comfortable with it, and this gets to where you’re going with this, is you like being a creative person; you are a creative thinker. And so this is an opportunity to bring creative thinking on top of something so crazy that could go in any direction. It can solve problems, if you know how to use it.

Noah: Yes. And to be clear, I absolutely think it will have an impact on the industry and everything else. I just don’t know exactly what that will be.

Drew: Well, we’re not here to predict. But we are here to help folks. Okay, real world: how do we think about these tools, and then how do we use them? I love the way you think about this, so let’s talk about that.

Noah: So why is this so confusing? What’s going on? That’s a question I asked myself about a year ago: why do I feel this way? Why am I having this weird feeling? I’ve spent a lot of my career as a strategist and a CEO and an entrepreneur, and that’s born out of being able to feel confident about my ability to understand what’s going on. And I think it’s because this is just so fundamentally counterintuitive, and I don’t use the term in any kind of light sense; it fundamentally runs counter to the intuition that I have about how the world works.

And I first started thinking of that word when I got access to build plugins for ChatGPT. Plugins were the precursor to GPTs; they were the ability for ChatGPT to talk out to other systems and to get data back. And one day I got access to build a ChatGPT plugin, and I canceled all my meetings. I was just like, “This is what I’m going to do today; I’m just going to try to figure it out and see what it looks like.” I’ve spent a lot of time building plugins and writing integrations and doing this kind of stuff, so I’m very comfortable with it. What you do is you go to an API page and you read the API specs, right? It could be Amazon, it could be Meta; they’re all the same, it doesn’t really matter. They tell you how to send them data, and they tell you how they’re going to send your data back. If you don’t send them data the right way, or you don’t receive their data back in the right way, it doesn’t work. And that’s it; there’s nothing blurry about that. It’s very clear, it works exactly the way they say it works, and if anything doesn’t happen in exactly that way, it’s just not going to function.

And I read the OpenAI ChatGPT plugin specs. And they basically said, we don’t really have an API spec: “Just stick this file in the root directory of your application, and in this file, describe to us how you want us to send you data and describe to us how you’re going to send data back to us. And we’ll do the rest.” That is weird. I’m sure most of the people watching this have not actually written integration code; I’m sure they’ve worked on things that integrated with other systems. But this is the opposite of how the world works. I have built my career on having a good intuition for how to tie these systems together, and one day I woke up and it works the opposite way from how I thought it did. I put this file in my directory and it all just worked, and it works because they use AI as this fuzzy interface.
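For context, the file Noah describes was a small JSON manifest (served from the app at `/.well-known/ai-plugin.json`) whose description fields told ChatGPT, in plain English, what the service does and how to call it. A rough sketch of what one looked like; the names and URLs here are invented placeholders, not a real plugin:

```json
{
  "schema_version": "v1",
  "name_for_human": "Example Plugin",
  "name_for_model": "example_plugin",
  "description_for_human": "Looks up example data.",
  "description_for_model": "Use this when the user asks about example data. Send a plain-text query to the API; it returns JSON results.",
  "auth": { "type": "none" },
  "api": { "type": "openapi", "url": "https://example.com/openapi.yaml" },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "hello@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

The `description_for_model` field is the “fuzzy interface” part: it is prose for the model to read, not a schema for a parser to validate.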

So the reason it can work, the reason they can do that and no other software platform before could, is because they just say, “Hey, however you send us data, we don’t really care; we’re going to have the AI figure out how to take your data and turn it into our format, and we’ll be all good.” It’s really crazy. And so, what’s actually happening underneath? Why is this possible? It’s all because it just makes predictions, right? The whole game here is that it’s just a prediction engine. This is, I think, one of the biggest things people misunderstand about how these models work: when you put in “the quick brown fox,” or you ask a question, or you do anything, it’s just completing the sentence. It’s completing the prompt, it’s continuing it. And it does that by making predictions.

So if you put in “the quick brown fox,” it has a bunch of possible words that could go next. And this is a slightly oversimplified version, but it’s good enough for the purposes of the vast majority of us who aren’t going to be building machine learning models in our basement or whatever. It’s just going along and completing the text. It has learned, by reading the entire internet, what the connections between ideas are, and it’s able to say, “Okay, the most probable next word is jumps.” It turns out that’s a pretty useful thing for lots of problems. So you can give it the text from an email and ask, “Is the next word spam or not spam?” and you can write a spam detector. Or you could give it a tweet and ask, “Is the next word positive, neutral, or negative?” and you can write a sentiment detector. And all it’s doing, all the time, is predicting, based on everything that it’s learned.
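The next-word loop Noah sketches can be made concrete with a toy model. This is a deliberately tiny bigram counter, nothing like a real LLM in scale or mechanism, but it shows the same basic mechanic: learn from text which word tends to follow which, then complete a prompt by picking the most probable continuation:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count how often each word follows each other word."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model: dict, word: str) -> str:
    """Return the most probable next word seen in training ('' if unseen)."""
    key = word.lower()
    if key not in model:
        return ""
    return model[key].most_common(1)[0][0]

corpus = (
    "the quick brown fox jumps over the lazy dog "
    "the quick brown fox jumps again"
)
model = train_bigrams(corpus)
print(predict_next(model, "fox"))   # -> jumps
print(predict_next(model, "lazy"))  # -> dog
```

A real model predicts over subword tokens with a neural network rather than raw counts, but the classification trick Noah mentions is the same idea: frame “spam or not spam” as the next-token question and read off which continuation is more probable.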

But that feels weird, because computers don’t work that way. Computers are deterministic machines. Even if you’ve never written any code, you have some understanding that what’s going on underneath is a bunch of code that says, “If this happens, then do this. And if that happens, then do that.” That’s a deterministic process. But AI is probabilistic. And when you run this probabilistic machine, things can get really weird.

One of the fun examples is this screenshot, from a while back, though ChatGPT won’t answer this question anymore. If you said, “Give me a quote said by Noah Brier,” it says, “Ideas are like seeds, they need nurturing, persistence, and the right environment.” And then it gave another one: “Curiosity…” I never said any of these things. These are not quotes from me, so they’re hallucinations; technically, that’s what people call them. But they are things I talk about, things I might have said. It clearly knows who I am, and it clearly knows what I’m interested in, what I write about. So it has clearly captured the right feeling. It just can’t quote me, because it’s just predicting one word after the other, roughly.

Drew: Yeah, and for folks that have fun with this, ask ChatGPT to write your bio, it’ll get the essence right, but it’ll get the facts wrong.

Noah: Exactly. It clearly has a good sense. Basically, one way to conceive of these things is that these large language models are bad at everything we expect computers to be good at. The canonical example, and you already said this, is math. This is my favorite example. The top one is me asking my computer to multiply a five-digit number by a six-digit number; the bottom one is me asking GPT-4, the best, most cutting-edge model. At first, both answers look roughly the same. But then you look more closely, and what you realize is that the AI answer has the first four digits right and the last three digits right, and the ones in the middle are wrong. And the reason for that is that math is a deterministic process; you’re always going to get the same answer every time. And there’s no deterministic mathematical process being run by the AI. It’s just guessing based on everything it knows.

One way to conceive of this is, if I asked you to quickly multiply these two numbers, and I said you have three seconds, maybe the one thing you would do is say, “Okay, I know it’s in the billions or something.” I think that’s like a decent way to kind of conceive of it. But that’s super weird, right? Because like we are very used to and very comfortable with computers being able to do math very consistently, much more quickly and much more consistently than we can.
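There is a neat arithmetic angle on why the first and last digits, in particular, can come out right while the middle goes wrong: the leading digits of a product follow from a rough estimate of the factors, and the trailing digits depend only on the factors’ trailing digits, but the middle digits require carrying out the full deterministic multiplication. A quick illustration (the factors here are made up for the demo, not the ones on Noah’s slide):

```python
a, b = 48731, 926504                     # made-up 5- and 6-digit factors
exact = a * b                            # the deterministic answer: 45149466424

# Leading digits: recoverable from a rough estimate of each factor.
estimate = round(a, -2) * round(b, -2)   # round each factor to the nearest hundred
assert str(exact)[:3] == str(estimate)[:3]

# Trailing digits: depend only on the factors' last three digits.
tail = (a % 1000) * (b % 1000) % 1000
assert str(exact)[-3:] == f"{tail:03d}"

# The middle digits have no such shortcut: only the full multiply produces them.
print(exact, estimate, tail)
```

So a model that has absorbed rough magnitudes and last-digit patterns can plausibly nail the ends, while the middle digits are exactly the part that demands deterministic computation.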

One thing that people have run into, and before we were talking about the adoption challenges, is there are a lot of people who, even if they haven’t asked it to do math, have gone to ChatGPT and said, “Do a thing computers are good at,” and it comes back with a bad answer. And they’re like, “Well, this thing isn’t very good.” And my response to that is, try to rewind yourself, as hard as it is to do, to five years ago. You had no idea about AI working in this way. And imagine I came up to you on the street and said, “Hey Drew, open your computer and have it write me a sonnet.” You’d just think, this is a crazy person. The idea of asking your computer to write you a sonnet was a ridiculous thing to say, but now it’s just part of our reality. Now you just open GPT, and if I said, “Drew, have your computer write me a sonnet,” five seconds later your computer will have written me a sonnet. Where it basically leads us is to this place where everything we expect computers to be good at, these are bad at, and everything we expect them to be bad at, these are good at. And so, on the flip side of the math, one of my favorite examples: over the last couple of years, I have been doing lots of weird experiments at this intersection of marketing and AI. One of them was a manifesto generator. I worked with a creative director friend of mine who has written many manifestos, and we built this thing. Does it write the best manifesto in the world? No. But then, like a week later, someone sent me this video. This is the CMO of Rare Beauty on stage at Adweek, reading a manifesto generated by AI, by this machine I built.

Video: So in the year 2020, when screens became mirrors, and lakes became our reflection, Rare Beauty emerged. A fearless champion in the arena of authenticity. We reject the digital masquerade of flawlessness we dismantle the filters that blur our true selves. We touch shadows on the illusion of perfection. We believe in beauty, untamed, unfiltered, and unequivocally real. We believe in the power of every freckle, scar, and asymmetry. We believe in the radical notion that beauty should not conform. We call on the bold, the spirited, the original, Dare to bare your rare. Dare to defy the standard. Join us in a revolution where every selfie is a statement. Here’s to the rare ones, to the real, real beauty waiting to break free. Shatter the glass of false ideals. Let beauty, your beauty define the world.

Noah: So is that the best manifesto ever written? Obviously not. But is it pretty wild that a computer wrote that manifesto? Obviously. And that’s just the equivalent of asking your computer to write a sonnet. Or again, I built this Collabs experiment. Are these the best collabs? No; if you look closely, all the shoelaces are weird. But these came out of a computer; it’s insane to even imagine. And I think where it leads us is this thing where we look at it and we say, hey, this is weird. Part of why everybody feels so uncomfortable, and part of why it feels like such a crazy moment, is because we’ve all spent 30-40 years building up a lot of intuition for how computers work. And essentially, we now have this thing that doesn’t work anything like that, but it looks like a computer. And so that to me is the core of the problem. And so then it gets us into: what do we do about it?

Drew: Well, before you go on, I just love this chart, because it really lays it out so clearly. Math – computers. Creative writing – AI. Information storage – computers. Pattern recognition – AI. And by the way, gap analysis is such an interesting use case, one that’s really hard for humans and impossible for traditional computers. Multitasking, general-purpose programs – computers. Yeah, so fascinating. So thank you for that. I love this chart. Okay, next.

Noah: So what do we do about it? How do we get practical? I think we just need to build intuition. And when I think about intuition, I like sports. I like Formula One. This is Ayrton Senna driving Monaco in 1990. This is a person driving 160 miles an hour around streets that are essentially tiny one-way roads; it’s bonkers. And he clearly has an unbelievable amount of intuition for his car, for the track, for everything else. If you’re a sports fan, this rings true across all sports; this is what athletes have, this perfect intuition for their body. And I think when you think about what we do professionally, a lot of those of us who have become leaders in organizations have been able to build that kind of intuition for our industry, for our work. You know, if I asked you whether a copy line was good or not, you have a career of intuition to know the answer to that question; you could answer the question more quickly than you could tell me why it’s good. Because that’s what intuition looks like. And I think that is what we all need to do. There’s a German word for it, Fingerspitzengefühl, which basically means we all need to build fingertip feeling for this. And I think part of what’s so hard, and part of what is driving this adoption barrier, if you will, is that if you look at this and say, this looks like a computer that can do more stuff, how do we solve the same kinds of problems we’ve been solving? You’re just going to run into a wall, and that’s why you have to get comfortable with it. You have to get a feeling for what it’s good at and what it’s not good at. And then you start to see the possibilities, and you start to see the kinds of impacts that it can have.

Drew: Fingerspitzengefühl, that just rolls right off the tongue. Okay, so let's keep going.

Noah: Yeah, so what I've been doing is just playing; I've been building a lot. Becca is something I'll talk about, something we worked on together, a bot that looked at some of the transcripts and stuff from your podcast. Collabs is something I'm going to talk about in a little more depth, but I've just been trying to build as much as possible; I've had the opportunity to do that. Collabs is one of the first experiments I did, and it's really the thing that bit me. It's the reason I fell down the rabbit hole. It's still up; people can go to my website, brxnd.ai, and play with it, they can make their own collabs. You just go in, you choose two brands, you choose a product, and it generates a collab. Some of them are amazing; some of them are not amazing. The process it goes through is kind of interesting, though. So the first thing is I fine-tuned a model. I taught it to write marketing copy by showing it 500 examples of collab announcements, so it had really good examples to learn from. First it writes the copy, then another AI extracts a description from that copy, imagining what the product might look like, then another one writes the prompt, and then it generates an image. So this is what my 11-year-old niece asked to see, Nike Hello Kitty Dunks, or a Patagonia Froot Loops backpack, or one of my all-time favorites, the Grateful Dead Hermès scarf. There's a whole side I am very interested in exploring, which is the connection between Grateful Dead branding and hallucinations, but that's for another conversation in the future. Ultimately, this led me to a bunch of real observations, and this is where I started to build up my own Fingerspitzengefühl. The first one, which sounds super trite, was just that good brands make better images.
And that's something I started to notice as I looked at more and more of these: the better the brand was, the better the image that came out. Lego and Carhartt are pretty iconic brands, and it was able to create a really great image that's funny and interesting, and that's real, right? A Carhartt jacket for your Lego guy is a funny idea. And while it sounds trite that of course good brands make better images, what's interesting is that these things start from nothing. You start from a system that knows nothing about the world, you feed it a ton of images, and it seems to have come to the same conclusions that you and I would about what the best brands are. And I spent a lot of time poring over that, because that's crazy. They learned what takes us a career to learn in, whatever, a month of training. And I think the reason is that these are pattern recognition machines; what they do when they're doing their machine learning is build up all these connections in their neural network between different ideas and concepts. And I think in a way you can conceive of brands as being pattern generators. That's what you do when you build a brand: you generate patterns. And the best brands generate the most distinct patterns. They do it consistently, they do it over time, they're recognizable, all these things. And so I think that's what it is. I also think that's part of why marketing is such an interesting space to be thinking about these tools, because I do think they're particularly well suited for each other.
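The multi-stage pipeline Noah describes, fine-tuned copywriting, then description extraction, then image prompting, can be sketched roughly as chained model calls. Everything below is an illustrative placeholder, not Brxnd's actual code: the function names are hypothetical, and the bodies are stubs standing in for real model calls.

```python
# A rough sketch of the Collabs pipeline: each stage is a separate model
# call, chained together. The stubs below stand in for a fine-tuned
# language model (stages 1-3) and an image model (stage 4, omitted).
# All names and wording here are hypothetical, for illustration only.

def write_collab_copy(brand_a: str, brand_b: str, product: str) -> str:
    """Stage 1: a model fine-tuned on ~500 collab announcements writes the copy."""
    return f"Introducing the {brand_a} x {brand_b} {product}: two icons, one drop."

def extract_visual_description(copy: str) -> str:
    """Stage 2: a second model reads the copy and imagines what it might look like."""
    return f"Product photo implied by: {copy}"

def write_image_prompt(description: str) -> str:
    """Stage 3: another model turns the description into an image-generation prompt."""
    return f"Studio shot, white background. {description}"

def generate_collab(brand_a: str, brand_b: str, product: str) -> str:
    """Chain the stages: copy -> description -> image prompt (-> image model)."""
    copy = write_collab_copy(brand_a, brand_b, product)
    description = extract_visual_description(copy)
    return write_image_prompt(description)

prompt = generate_collab("Nike", "Hello Kitty", "Dunks")
print(prompt)
```

The design point is the chaining itself: each stage's output becomes the next stage's input, so the final image prompt is grounded in copy the fine-tuned model wrote first.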

Drew: Let me ask you a question. Is this a sort of test of brand strength? I mean, you know you have a brand when it can do a good collab?

Noah: I think so. There are a lot of people playing with this for research purposes, trying to do brand trackers and whatever, and I think there are a lot of challenges with that, just because of the way these models are trained. But fundamentally, yes. My argument is that they have quantified a thing we all know to be true but that is very hard to get out as numbers. It's a you-know-it-when-you-see-it thing, and it is very clear: I could show you 100 images, and if you were to stack rank them, the best brands would rise to the top without question.

Drew: It is amazing. And it goes back to something I was talking about with Liza Adams, who is using this for strategic development; she will go in and scrape G2 reviews as a way of getting to know customers. Now, some might argue that's good data or bad data. If you're talking to the CMO of G2, they'll say it's great data; if you're talking to the one at TrustRadius, they may have a different story. Nonetheless, the key point I wanted to make is, for these things to work, if you're training them, you need good data. And good brands have good data because they have images out there. Essentially, it's a visual version of data. Right?

Noah: Well, and to that point very specifically, one of my favorite experiments with this was the relative-strength one. I keep going back to Hermès scarves, because Hermès has such a strong brand that every image that came out was amazing. Every scarf you asked it to do would just always look amazing. And then I started playing with, "Okay, what if I combine it with different brands?" Sometimes it was very obvious: that Simpsons one might not be a perfect image, but there's no question that's a Simpsons scarf, right? The one on the left, though, is the Financial Times, and why does that not pop? If I didn't have the label on it, you wouldn't have any idea that was the prompt. Why isn't it pink is a very interesting question that a lot of people have asked. But this is exactly what you're saying: one of the things I started to see is that it has such a clear conception of what an Hermès scarf looks like that if you combined it with another brand that didn't have equal strength, it just overloaded it. Somewhere in there, it has a really good sense.

Drew: We're talking about visual brand strength, which is one aspect of brand strength, but it's certainly the one that's showing clearly in this experiment.

Noah: Yes, we're talking about visual brand strength. It ties into other kinds of brand strength, I would argue, because even these models are still relying on text for other parts of it; they need to interpret the prompts and everything else. But yes, fundamentally, this particular experiment was very focused in that area. It was able to pick up on things like art direction, which I just thought was interesting. As somebody pointed out, it's not a perfect Supreme; it's much more the Polo Ralph Lauren colorway than the Supreme colorway. But as far as art direction, for how this shot might be from Supreme, it's pretty good. Another thing, and this goes to something you said earlier: one response people have is, "Oh, well, it's good at reproducing images from the best brands because the best brands have the most stuff out there. They're the biggest. Of course it's good at Coca-Cola, because Coca-Cola has the most images, because it's the biggest brand." And I certainly feel for that particular argument, because I do think a lot of the big brand studies, when they do the brand rankings, are often fundamentally showing you the general size of the brands more than the general strength of the brands; Google and Coca-Cola show up at the top because they're gigantic. But here, there are a lot of much smaller brands. The left one is ALD, which is a small fashion brand out of New York. The right one is Palace Skateboards and Ghostly. Ghostly is a friend of mine's record label, and they've done an amazing job, and it totally has picked up on the aesthetic. And that's interesting and amazing. And then the final one, and this is my very favorite example from all of these, and I think the most telling, and maybe the single most important thing for us to keep in mind as marketers, is these images, especially the one on the right. Neither of these is a Nike; they weren't prompted as Nikes.
The one on the left is Porsche Doritos sneakers. The one on the right is Dunkin' Saucony sneakers. So technically, these are hallucinations; the AI has hallucinated these to be Nikes. My argument would be that while this is technically a hallucination, it is not perceptually a hallucination. If we were to go ask a bunch of people on the street to draw a Dunkin' Saucony sneaker from memory, they would put a swoosh on it, because to many people, I know this to be true, sneaker means Nike. And that is really important to us as marketers, because that's the world we live in. We live in a world where perceptual reality is more important than technical reality. If a consumer, potential customer, or prospect thinks that your competitor has a better feature set than you, but you actually have a better feature set than them, you have still lost: they'll see the competitor, and you need to convince them that they are wrong, and that is hard to do. Perception is reality in our business. It just is; that's the world that we live in. And I think that these are probably the single best tool we've ever had for understanding mass perception, at least mass perception for the dataset, which is the internet. So let's get practical, beyond just playing with it. How do you play with it? What's the best way to approach this? I do not consider myself the expert on any of this, because I think it's very hard to be an expert; these are my own personal thoughts and opinions. First piece: don't overthink it. There's a lot of stuff out there, a lot of tools focused on marketers. Just start with ChatGPT. GPT-4 is still, by far, the best model that exists. That may not be true in a month, but as of today, it's the best model, and it has everything. So just start there; start getting comfortable by playing with ChatGPT.

Drew: I'm so into that, because while you can use multiple models, and sometimes it makes sense to go back and forth between ChatGPT and, say, Claude when you're doing some kind of analysis, we don't have time to learn five different applications. And it's just easier right now, because you've got this really good one, and they keep adding stuff to it. I know they're just debuting a video thing now; I saw a demonstration of it, and it's crazy good. So my guess is it's only going to get better. Okay, step two.

Noah: Yeah. Also, just to reinforce the thing I said earlier, the biggest thing you're going to get out of these tools right now is your own intuition for how they work. So if that's the goal, and you're just trying to understand as much as you can, why look around? Just use the best model. It's pretty simple. The second is more of something to understand: if you've used ChatGPT, part of what's amazing about it is that it combines the AI with all this stuff that's not AI. At the beginning, I was talking about determinism versus non-determinism. If you've used ChatGPT with search, whatever they used to call it, Browse with Bing, now they call it web browsing, that is combining those two things, because the model doesn't have knowledge after a certain date. If you ask it certain kinds of questions, it will go out to the web, search it, and pull that in. So it's combining the determinism and the non-determinism, and this is the real future; these models are not going to exist in a vacuum. The hallucinations can be a real problem at times, but when you start to combine them and have them work together, it's just kind of amazing. The other one, if you haven't played with it, is the most mind-blowing thing: when you combine it with code. This is Code Interpreter, or whatever they call it now. You can give it an Excel file, and I just said these things can't do math, but what they can do is come back and write code, and the code can do math. Because running code is a deterministic process, it runs the code, and the code does the math. That's crazy, and it's amazing. And again, combining these things is where the real power is.
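To make Noah's point concrete, here is the kind of code a code-interpreter tool might write and run when you hand it a spreadsheet and ask for a total. The column names and figures are purely illustrative; the point is that while the model's text generation is non-deterministic, the code it emits runs deterministically, so the arithmetic is exact every time.

```python
# Sketch of code-interpreter-style math: the model writes ordinary code,
# and the code, not the model, does the arithmetic. The data here is a
# stand-in for an uploaded Excel/CSV file; all values are illustrative.
import csv
import io

# Stand-in for the user's uploaded file.
uploaded = io.StringIO("region,revenue\nEast,1200\nWest,800\nNorth,950\n")

reader = csv.DictReader(uploaded)
total = sum(float(row["revenue"]) for row in reader)
print(total)  # exact, deterministic result: 2950.0
```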
And then the last one, and we don't have time to go into the deep details of how this works specifically, but when you combine it with this thing called embeddings and vectorization, you can get really amazing results. This is an experiment Drew and I did together, where we took a bunch of data from conversations and pulled those out as transcripts. It's able to go back and look at specific quotes from CMOs in Huddles about a specific topic. And what's powerful here, again, we've talked about hallucinations, they can make stuff up, you talked about the bio thing where it makes up a piece of your bio, but here it's answering the question, building a summary of the answers, and you're able to click a button and go see the real verbatims. So you can ground it in the real truth, and I think this is a really powerful technique. It has come to be called RAG, retrieval-augmented generation. This is probably the biggest enterprise use case, the one that made the first splash, because it's such an obvious thing, right? Take all of your documentation and make it searchable and chattable, grounded back to the original doc.
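The retrieval step behind RAG can be sketched in a few lines. Real systems use learned embedding models and a vector database; this toy version uses a bag-of-words vector purely to show the mechanic of embedding, ranking by cosine similarity, and handing the retrieved verbatims to the model. All of the sample "transcripts" are made up for illustration.

```python
# Toy sketch of the retrieval half of RAG: embed documents, find the ones
# closest to the question, and use those verbatims to ground the answer.
# Bag-of-words vectors stand in for real learned embeddings.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Crude word-count 'embedding' (illustrative stand-in)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

transcripts = [
    "cmo huddle quote about brand measurement and trackers",
    "cmo huddle quote about sales and marketing alignment",
    "cmo huddle quote about prompting techniques",
]

def retrieve(question: str, k: int = 1) -> list:
    """Return the k transcripts most similar to the question."""
    q = embed(question)
    ranked = sorted(transcripts, key=lambda t: cosine(q, embed(t)), reverse=True)
    return ranked[:k]

# The retrieved verbatims would be pasted into the model's prompt, and the
# UI can link each summarized claim back to its source quote.
sources = retrieve("brand measurement trackers")
print(sources)
```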

Drew: It just reminded me of a couple of things. One, since we built this, we've got another 900 pages of content to share to keep informing it, so these can be living, breathing, I guess you would call this a GPT at this point in time, right?

Noah: Yeah, so ChatGPT has called these things GPTs; they recently got turned down for a trademark on that. I would not call it that, because I think that's a pretty technically inaccurate way to describe it. The general industry term is RAG, retrieval-augmented generation: you're retrieving from another database, and then you're augmenting the AI generation with your first-party data.

Drew: I love it. Of course, we'll include a link to Becca in the show notes. But cool.

Noah: So step three: don't go crazy about prompting. Earlier this year, I did a big project about prompting and spent a lot of time reading research papers. And I'm sure anybody who's been on LinkedIn has seen somebody posting, "Here are the 16 prompts you can't live without," and whatever. To me, these three things are all you really need to know about prompting. The first one is called zero-shot. Zero-shot prompting is when you just ask a question: you're in ChatGPT, you just talk to it, you ask something, it gives you an answer back. Ninety percent of the time, that's certainly fine. Few-shot is where you give it examples along with your prompt. Few-shot is really powerful when you want to do something like, "Hey, I want it to write a headline in my style." You give it a couple of headlines in your style, then you give it the copy from the article, and it writes a headline in your style. Few-shot is just those examples. And then chain of thought is super interesting, one that I think we're just beginning to really understand. It came out of Google in 2022, a research paper on chain of thought. One of the ways they test these models is they have them do a bunch of standardized tests. So you know the standardized test question: "Noah has six jelly beans, and Drew has eight jelly beans, and Noah gives Drew two jelly beans. How many jelly beans does Drew have?" Like I just said, these things can't do math, they don't do math, and while that's not a complicated math question, it gets tripped up and doesn't answer really well. They found that if you give the AI a few examples of those kinds of questions, and you also give the AI the answer, but between the example and the answer you explain your thought process for getting to the answer.
So you say, "Okay, well, we start with understanding who has how many: Drew has this many, and Noah has this many. Then we figure out who is subtracting and who is adding; in this case, Noah is giving, so he's subtracting, and Drew is adding. And then we add them together at the end, eight plus two, and we get to ten." You explain that thought process, and you do that a few times, and then the AI is able to approximate that thought process, and it comes up with quantitatively better results. There's actually a paper from Microsoft in December that built on chain of thought: they had models take a medical standardized test, and they found that a general-purpose AI like GPT-4 was able to outperform a specially trained medical AI if given a good enough chain-of-thought prompt. One takeaway, just to hit on this point again: as you're thinking internally, as a CMO, "How do I start to really take advantage of this inside my organization?" I think part of what you have to do is find those few-shot examples. What are the best examples of everything we do? You should have the five best examples of headlines, of articles, of emails, of every single thing, ready to go for your team. And then also, eventually, the thought process: you take the best person at writing emails that convert, and you have them write down how they write emails that convert. That, I think, is a place to start. It's good to start collecting that kind of information inside the organization.
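The three prompt styles Noah lists are, in the end, just differently shaped strings sent to a model. A minimal sketch, with all example headlines and wording invented for illustration:

```python
# The three prompt styles: zero-shot, few-shot, and chain of thought.
# These are plain strings a model would receive; every example below is
# illustrative, not from any real campaign.

# Zero-shot: just ask.
zero_shot = "Write a headline for an article about AI adoption barriers."

# Few-shot: prepend examples of the style you want before the task.
few_shot = "\n".join([
    "Headline: CMOs, Stop Waiting for Permission to Experiment",
    "Headline: Your Brand Is a Pattern. Act Like It.",
    "Task: Write a headline in the same style for an article about AI adoption barriers.",
])

# Chain of thought: the example includes the reasoning between the question
# and the answer, so the model approximates the thought process, not just
# the final answer, before tackling the new question at the end.
chain_of_thought = (
    "Q: Noah has 6 jelly beans and Drew has 8. Noah gives Drew 2. "
    "How many jelly beans does Drew have?\n"
    "Reasoning: Drew starts with 8. Noah is giving, so Noah subtracts 2 "
    "and Drew adds 2. 8 + 2 = 10.\n"
    "A: 10\n"
    "Q: Drew has 12 jelly beans and Noah has 5. Drew gives Noah 3. "
    "How many jelly beans does Noah have?\n"
)
```

The design difference is what you pay for: zero-shot costs nothing to prepare, few-shot costs a handful of curated examples, and chain of thought additionally costs someone writing out the reasoning, which is exactly the internal collection exercise Noah recommends.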

Drew: Yeah, I'm so with you. And I was thinking, it's funny, there's a scenario where you've got to know a particular target, and you've also realized it's not working: you're not converting that particular target. You can include the examples of the conversations, and then in theory you could say, "Where are we going wrong?" And at least it'll start to explore that with you and challenge it. And I'm imagining, and this is why companies like Gong, for example, that are recording these conversations, once you start to input success data at the other end, they're going to build the AI in there to help analyze those conversations, because you're essentially giving it really good information that it can train on, and then it can help you find the exact answers.

Noah: So then finally, I saw this meme recently, and I just thought it was pretty hilarious. There are a bunch of people in the middle saying, "Oh, here are the 73 tips you have to use for prompting, and here's how to do it perfectly, and do this and do that." The reality is, 99% of the time, you can just talk to it, and you'll figure it out, and it's really good. And there was an announcement yesterday that Google's next model is going to have some insane context length, so it'll have tons of background from your conversation. They're just getting better and better. There are times when these specific techniques are really useful, but the vast majority of the time, you just play with it.

Drew: And I think, for the folks that are new to these tools, a light bulb goes off and you say, "Well, I don't know how to use it." Well, ask it: "What prompts should I use to get you to do this, or to help me solve this problem?" And then you could ask it again, and just keep iterating with it until you get to a place where you go, "Oh, I hadn't thought of that."

Noah: Yeah. I would also just say, especially for folks who are just trying to wrap their heads around it, one of my favorite categories of prompts is asking it for feedback. Anytime I'm writing something that is important and serious, I ask it to tell me whether I've been clear enough. Does it seem like I understand this topic well enough, or am I doing the thing where, when you don't understand something, you just start talking? Those kinds of prompts I find amazingly helpful, because you may not turn to somebody else to be your reader on an email that's really important; you're like, "I'm not going to bother somebody to have them read this email before it goes out." But it's really nice to have somebody to collaborate with in that particular environment. Takeaways, let's do it. First one: we should conceive of them as concept retrieval systems, not fact retrieval systems. This is super important. We showed that chart at the beginning; one of the things it said computers are good at is information storage and retrieval, and AI is bad at information storage and retrieval. AI is really good, though, at concept retrieval. The funniest example of this is you can ask the AI, "What's the weather in LA today?" and it'll say "68 and sunny," not because it checked the weather, but because it's really good at understanding the general vibe of things, and that is probably the weather, or close enough. They're vibe machines, and that intersects with brand stuff: brands are vibes, brands are feelings that we all have. The stronger the brand is, the stronger the feelings you've created across a set of customers and prospects and all of these people. And again, I think it's about finding ways to link those together. Hallucinations are a feature, not a bug. I think this is pretty straightforward.
There was a great tweet recently from Andrej Karpathy, who ran AI at Tesla and was at OpenAI for a while. He basically said that all of these people asking, "How do we eliminate hallucinations in large language models?" are saying an absurd thing, because the models are just hallucinating all the time. They don't have a database. We shouldn't be amazed that they are wrong about that one part of your bio; we should be amazed that the other 99% is right. This is incredible. They're essentially this amazing way of compressing what is effectively all of the knowledge ever produced in writing by humans, and so they're often technically incorrect but perceptually correct. This is the Nike example, and again, as marketers doing research, I think it's really important to keep this in mind. The way to conceive of this particular point is that one of the things AI wants to do is average things out. It wants to give you the average version of a manifesto, or the average version of a copy line, or the average version of an essay, or whatever. It's just important to keep in mind whether what you want is average. There are times, hey, I'm just trying to learn about something, when the average answer is totally fine. But when you want to go above average, a hard-hitting headline, whatever it is, you just need to be aware of that and understand it. And then the last one is just that these things are confusing, and that's cool. Don't let anyone tell you anything different. There are a lot of people out there trying to convince you otherwise. Just ask questions, tinker, and finally, don't trust anyone who sounds too confident. Those are all my big takeaways.

Drew: Well, that, along with: hurry and get yourself a ticket to the BrandX.AI conference before it sells out. Right?

Noah: BrXnd.AI. You can buy yourself a ticket. It’ll be in New York City, May 8th, 2024. It should be a great day. Last year was a great day.

Drew: It was a great day. Noah Brier, amazing as always, lots of rich insights, lots of food for thought, so many different things. I know that if you’ve listened to this and you’ve stuck with us, your next stop is going to be ChatGPT and you’re just going to start asking questions, so very cool. Noah, thank you so much for sharing this. 

Noah: Thanks for having me.

Drew: For more interviews with innovative marketers, visit renegade.com/podcasts and hit that subscribe button.

Show Credits

Renegade Marketers Unite is written and directed by Drew Neisser. Hey, that's me! This show is produced by Melissa Caffrey, Laura Parkyn, Ishar Cuevas, and our B2B podcast partners Share Your Genius. The music is by the amazing Burns Twins, and the intro voiceover is by Linda Cornelius. To find the transcripts of all episodes, suggest future guests, or learn more about B2B branding, CMO Huddles, or my CMO coaching service, check out renegade.com. I'm your host, Drew Neisser. And until next time, keep those Renegade thinking caps on and strong!